1. Visualization of Decision Processes Using a Cognitive Architecture. In: DTIC (2013). Source: BASE.
2. Toward Determining the Comprehensibility of Machine Translations. In: DTIC (2012).
4. Cognitive Tools for Humanoid Robots in Space. In: DTIC and NTIS (2004).
5. Finding the FOO: A Pilot Study for a Multimodal Interface. In: DTIC and NTIS (2003).
6. Spatial Language for Human-Robot Dialogs. In: DTIC and NTIS (2003).
7. An Agent Driven Human-centric Interface for Autonomous Mobile Robots. In: DTIC and NTIS (2003).
8. Using Spatial Language in a Human-Robot Dialog. In: DTIC and NTIS (2002).

Abstract: In conversation, people often use spatial relationships to describe their environment, e.g., "There is a desk in front of me and a doorway behind it," and to issue directives, e.g., "Go around the desk and through the doorway." In our research, we have been investigating the use of spatial relationships to establish a natural communication mechanism between people and robots, in particular for novice users. In this paper, the work on robot spatial relationships is combined with a multimodal robot interface developed at the Naval Research Lab. We show how linguistic spatial descriptions and other spatial information can be extracted from an evidence grid map and how this information can be used in a natural human-robot dialog. Prepared in collaboration with the University of Missouri, Columbia, MO. Sponsored in part by the Office of Naval Research (ONR). The original document contains color images.

Keywords: humanoid robots; humans; natural language; robots; social communication; spatial language; spatial relationships; voice communications; cybernetics; grids (coordinates); linguistics; man-computer interface; multimodal interface

URL: http://www.dtic.mil/docs/citations/ADA434976
URL: http://oai.dtic.mil/oai/oai?&verb=getRecord&metadataPrefix=html&identifier=ADA434976
9. Multi-modal Interfacing for Human-Robot Interaction. In: DTIC and NTIS (2001).
10. Using a Natural Language and Gesture Interface for Unmanned Vehicles. In: DTIC and NTIS (2000).
11. Goal Tracking and Goal Attainment: A Natural Language Means of Achieving Adjustable Autonomy. In: DTIC and NTIS (1999).
12. Goal Tracking in a Natural Language Interface: Towards Achieving Adjustable Autonomy. In: DTIC and NTIS (1999).
13. Integrating Natural Language and Gesture in a Robotics Domain. In: DTIC and NTIS (1998).
16. Talking to InterFIS: Adding Speech Input to a Natural Language Interface. In: DTIC and NTIS (1992).
18. InterFIS: A Natural Language Interface to the Fault Isolation Shell. In: DTIC and NTIS (1990).